Improve handling of arrays in @defer and @stream payloads
#12923
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
base: jerel/stream-support
Conversation
🦋 Changeset detected — Latest commit: c496039. The changes in this PR will be included in the next version bump.
✅ Docs preview has no changes. The preview was not built because there were no changes. Build ID: 65998f1ed1f52cf86e6a07cb
@@ -0,0 +1,5 @@
---
"@apollo/client": minor
I'm marking this as a minor change. While it does fix the array merging behavior, it's a big enough difference to warrant a minor.
expect(outgoingRequestSpy).toHaveBeenCalledTimes(2);
});

it.each([["cache-first"], ["no-cache"]] as const)(
These were added from #11374 (with some tweaks)
expect(outgoingRequestSpy).toHaveBeenCalledTimes(2);
});

it.each([["cache-first"], ["no-cache"]] as const)(
Added from #11374 (with some tweaks)
Query: {
  fields: {
    friendList: {
      merge: (existing = [], incoming, { field }) => {
Should we consider exposing a utility function or helper of some kind to do this kind of thing? At the very least, we probably want to put something in our docs alongside @stream that describes how to do this in your own code. This demonstrates that you can maintain the original cached items when using stream, instead of the default behavior of overwriting the array.
Let's document it first and figure out the perfect helper later.
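To make the discussion concrete while it gets documented, here is a minimal standalone sketch of such a merge function (the `mergePreservingTail` name and the plain-array shapes are illustrative assumptions, not Apollo Client API): it overlays incoming streamed items onto the cached array by index, so cached items past the end of the shorter incoming list survive instead of being overwritten.

```typescript
// Illustrative sketch only: a standalone helper (hypothetical name) with
// the same shape as an InMemoryCache merge function body. It keeps cached
// items that the shorter incoming @stream array has not (yet) replaced.
function mergePreservingTail<T>(existing: T[] = [], incoming: T[]): T[] {
  const merged = existing.slice(); // copy the cached array
  incoming.forEach((item, i) => {
    merged[i] = item; // overlay streamed items by index
  });
  return merged; // the cached tail beyond incoming.length survives
}
```

Inside a real field policy this would sit in `merge: (existing = [], incoming) => { ... }`; the helper form just makes the index-overlay behavior easy to test in isolation.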
friendList: [
  { name: "Luke", id: "1" },
  { name: "Han", id: "2" },
  { name: "Leia Cached", id: "3" },
This behavior will now rely on a merge function to keep the cached item if the user wishes.
Query: {
  fields: {
    friendList: {
      merge: (_, incoming) => incoming,
I added this here to silence the warning about data loss, but I think this might happen a lot with @stream since fetches against a @stream field will naturally return a shorter list than already-cached lists since the list is streamed in over time.
Should we consider muting the warning for @stream fields like we do for @defer? Or should we recommend adding these type policies to explicitly opt into the @stream behavior?
I'm more confused as to why this would warrant a data loss warning in the first place, at least with the output we have here. It's just overwriting a reference array with a shorter reference array, without any real data loss?
console.warn
Cache data may be lost when replacing the friendList field of a Query object.
This could cause additional (usually avoidable) network requests to fetch data that were otherwise cached.
To address this problem (which is not a bug in Apollo Client), define a custom merge function for the Query.friendList field, so InMemoryCache can safely merge these objects:
existing: [
{ __ref: 'Friend:1' },
{ __ref: 'Friend:2' },
{ __ref: 'Friend:3' },
[length]: 3
]
incoming: [ { __ref: 'Friend:1' }, [length]: 1 ]
For more information about these options, please refer to the documentation:
* Ensuring entity objects have IDs: https://go.apollo.dev/c/generating-unique-identifiers
* Defining custom merge functions: https://go.apollo.dev/c/merging-non-normalized-objects
Discussed in person. We want to silence the warning if the field has an @stream directive on it, otherwise every @stream field will require a field policy which feels broken.
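A minimal sketch of what that check could look like, using hand-rolled stand-ins for the graphql-js `FieldNode`/`DirectiveNode` shapes (the `hasStreamDirective` helper is a hypothetical name, not an existing Apollo Client export):

```typescript
// Hypothetical helper: decide whether to suppress the data-loss warning
// for a field by checking its AST selection for an @stream directive,
// mirroring how @defer is already special-cased. The types below are
// minimal stand-ins for the graphql-js AST node shapes.
type DirectiveNode = { name: { value: string } };
type FieldNode = { directives?: ReadonlyArray<DirectiveNode> };

function hasStreamDirective(field: FieldNode): boolean {
  return (field.directives ?? []).some((d) => d.name.value === "stream");
}
```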
export class DeepMerger<TContextArgs extends any[] = any[]> {
  constructor(
-    private reconciler: ReconcilerFunction<TContextArgs> = defaultReconciler as any as ReconcilerFunction<TContextArgs>
+    private reconciler: ReconcilerFunction<TContextArgs> = defaultReconciler as any as ReconcilerFunction<TContextArgs>,
Since this is an internal class, I know we are free to update the API, so with that in mind: should reconciler be moved to options instead? It's a bit awkward in places where we need arrayMerge but not reconciler, since you have to pass undefined as the first argument.
Thoughts on changing it so we can do:
new DeepMerger({ arrayMerge: 'truncate' });?
We use reconciler exactly once in the codebase, so it won't change things around a lot. What do you think about just making it an option?
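For illustration, an options-object constructor along the proposed lines could look like this toy sketch (`DeepMergerSketch`, its `mergeArrays` method, and the exact option names are assumptions for discussion, not the real internal class):

```typescript
// Toy sketch of the proposed API: new DeepMerger({ arrayMerge: 'truncate' }).
// "truncate" makes the merged array's length follow the incoming array;
// "combine" (the default) keeps the existing tail, as today.
type ArrayMergeStrategy = "combine" | "truncate";

class DeepMergerSketch {
  constructor(private options: { arrayMerge?: ArrayMergeStrategy } = {}) {}

  mergeArrays<T>(existing: T[], incoming: T[]): T[] {
    // Under "truncate" we start from an empty array so the result cannot
    // outlive the incoming length; under "combine" we start from a copy
    // of the existing array so its tail survives.
    const base: T[] =
      this.options.arrayMerge === "truncate" ? [] : existing.slice();
    incoming.forEach((item, i) => {
      base[i] = item;
    });
    return base;
  }
}
```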
"@apollo/client": minor
---

Fix an issue where deferred payloads that reteurned arrays with fewer items than the original cached array would retain items from the cached array. This change includes `@stream` arrays where stream arrays replace the cached arrays.
Suggested change:
- Fix an issue where deferred payloads that reteurned arrays with fewer items than the original cached array would retain items from the cached array. This change includes `@stream` arrays where stream arrays replace the cached arrays.
+ Fix an issue where deferred payloads that returned arrays with fewer items than the original cached array would retain items from the cached array. This change includes `@stream` arrays where stream arrays replace the cached arrays.
if (isNumericKey) {
  arrayMerge = "combine";
}
This feels like a workaround that might blow up in our face at some point - e.g. when we'd want to truncate a deeply nested array, but a parent is also an array.
Alternative suggestion: WDYT about adding an atPath option to DeepMerger?
The arrayMerge strategy would then only apply at the exact level where the new source is being inserted, with everything else always treated as combine - and we would also save creating all these deeply nested objects just for the sake of merging, which are thrown away after the merge anyway.
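A rough sketch of the atPath idea (names and shapes are assumptions for discussion, not the real DeepMerger): walk down to the exact path where the incremental payload lands, shallow-copy only the containers along the way, and let the strategy (here, truncation by wholesale replacement) apply solely at that leaf.

```typescript
// Hypothetical sketch of an atPath-scoped merge: everything along the
// path is treated as "combine" (shallow-copied and descended into), and
// only the value at the exact path is replaced (i.e. truncated).
type Path = ReadonlyArray<string | number>;

function mergeAtPath(target: any, source: any, path: Path): any {
  if (path.length === 0) {
    // The insertion point itself: the incoming source wins wholesale.
    return source;
  }
  const [head, ...rest] = path;
  const copy: any = Array.isArray(target) ? target.slice() : { ...target };
  copy[head] = mergeAtPath(copy[head], source, rest);
  return copy;
}
```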
  { __typename: "Friend", id: "2", name: "Han" },
  { __typename: "Friend", id: "3", name: "Leia" },
],
friendList: [{ __typename: "Friend", id: "1", name: "Luke (refetch)" }],
Tbh, for a refetch it feels as if it should be merged, not truncated. Or even better, truncated on the last chunk, not the first one. But I don't have a good idea on how to make that distinction.
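One way to express "truncate on the last chunk" as a sketch: incremental delivery payloads carry a hasNext flag, so truncation could be deferred until the server signals completion. Assumptions: `applyStreamChunk` is a hypothetical helper, and `received` stands for the full list of items received so far.

```typescript
// Hypothetical sketch: only trim the cached tail once the stream reports
// hasNext: false, so intermediate chunks merge instead of truncating.
function applyStreamChunk<T>(
  cached: T[],
  received: T[],
  hasNext: boolean
): T[] {
  const next = cached.slice();
  received.forEach((item, i) => {
    next[i] = item; // overlay items received so far
  });
  // On the final payload, drop cached items the server never re-sent.
  return hasNext ? next : next.slice(0, received.length);
}
```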
Fixes #12660
Fixes #11704
Fixes issues where arrays returned in `@defer` payloads would maintain cached items when the defer array was shorter than the cached array. This change also affects `@stream` arrays so that when a `@stream` chunk is processed, it truncates the array to the length of the stream payload. This ensures the length of the final result equals the length of the server array, much like it would if a plain `refetch` were issued.